Transactional memory

Transactional memory attempts to simplify parallel programming by allowing a group of load and store instructions to execute in an atomic way. It is a concurrency control mechanism analogous to database transactions for controlling access to shared memory in concurrent computing.

Hardware vs. software transactional implementations

Hardware transactional memory systems may comprise modifications to processors, caches, and bus protocols to support transactions.[1][2][3][4][5]

Software transactional memory provides transactional memory semantics in a software runtime library or in the programming language itself,[6] and requires minimal hardware support (typically an atomic compare-and-swap operation, or equivalent).
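As a rough illustration of the software approach, the following sketch shows one way a runtime library could provide such semantics. The names TVar and atomically are hypothetical, and a single global lock stands in for the atomic compare-and-swap a real implementation would use to publish commits; this is a minimal sketch, not a production design.

import threading

class TVar:
    """A transactional variable: a current value plus a version number."""
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()   # stands in for the atomic compare-and-swap

def atomically(action):
    """Run action(read, write) as a transaction, retrying it on conflict."""
    while True:
        read_set = {}    # TVar -> version seen when first read
        write_set = {}   # TVar -> value to be written on commit

        def read(tvar):
            if tvar in write_set:            # read our own pending write
                return write_set[tvar]
            read_set.setdefault(tvar, tvar.version)
            return tvar.value

        def write(tvar, value):
            write_set[tvar] = value

        result = action(read, write)

        with _commit_lock:
            # Validate: everything we read must still be at the version we saw.
            if all(t.version == v for t, v in read_set.items()):
                for t, value in write_set.items():
                    t.value = value
                    t.version += 1
                return result
        # A concurrent transaction committed first; discard our work and retry.

A transaction is then written as a function over read and write, for example:

checking, savings = TVar(100), TVar(50)

def transfer(read, write):
    write(checking, read(checking) - 30)
    write(savings, read(savings) + 30)

atomically(transfer)

The global lock here only guards the commit step; a real software transactional memory would typically use per-location versioning or compare-and-swap so that non-conflicting transactions can commit in parallel.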

Load-link/store-conditional (LL/SC), offered by many RISC processors, can be viewed as the most basic transactional memory support. However, LL/SC usually operates on data the size of a native machine word, so it effectively supports only single-word transactions.
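The single-word update pattern that LL/SC (or compare-and-swap) supports directly looks roughly like the sketch below. The cas function is an emulation introduced for illustration; on real hardware it is a single atomic instruction and Python does not expose it.

import threading

_word_lock = threading.Lock()

def cas(cell, expected, new):
    # Emulated compare-and-swap; real hardware performs this atomically
    # on one machine word.
    with _word_lock:
        if cell[0] == expected:
            cell[0] = new
            return True
        return False

counter = [0]   # a single shared "word"

def atomic_increment(cell):
    while True:                       # retry loop, as with LL/SC
        old = cell[0]                 # load-link: read the word
        if cas(cell, old, old + 1):   # store-conditional: fails if it changed
            return

Updating two independent words atomically, as in the money-transfer example below, cannot be expressed with a single such primitive; that is the gap transactional memory is meant to fill.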

Motivation

The motivation for transactional memory lies in the programming interface of parallel programs. The goal of a transactional memory system is to transparently support regions of code designated as transactions, that is, regions that must execute with atomicity, consistency, and isolation. Transactional memory allows writing code like the following example:

def transfer_money(from_account, to_account, amount):
    transaction:                      # hypothetical atomic block
        from_account = from_account - amount
        to_account = to_account + amount

In this code, the block introduced by "transaction" carries the atomicity, consistency, and isolation guarantees, and the underlying transactional memory implementation must provide those guarantees transparently.
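One trivial way to picture those guarantees is to serialize all transactions behind a single lock. The sketch below shows only the interface: the transaction() context manager and Account class are hypothetical, mutable account objects are used because reassigning local parameters (as in the pseudocode above) would not be visible to other threads, and nothing here reflects how a real implementation achieves concurrency.

import threading
from contextlib import contextmanager

_global_lock = threading.Lock()

@contextmanager
def transaction():
    # Simplest strategy that meets the isolation requirement: run every
    # transaction one at a time. Real transactional memory instead executes
    # transactions speculatively and rolls back on conflict; this sketch also
    # does not undo partial writes if the body raises an exception.
    with _global_lock:
        yield

class Account:
    def __init__(self, balance):
        self.balance = balance

def transfer_money(from_account, to_account, amount):
    with transaction():
        from_account.balance -= amount
        to_account.balance += amount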

Implementations

References

External links